Conversation

@aireenmei (Collaborator) commented Oct 22, 2025

Description

See the paragraph added to data_input_grain.md in this PR for a description of this behavior.
Also in b/452377649
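
As a rough illustration of the scale-down side (not the PR's actual implementation), one way to think about restoring a Grain iterator checkpoint on fewer hosts is to let each new data-loading host adopt a contiguous slice of the old per-host iterator states. The helper below is a minimal, hypothetical sketch; the function name and signature are invented for illustration, using the host counts from the tests below.

```python
# Hypothetical sketch only: shows how old per-host Grain iterator states could
# be assigned to a smaller set of hosts on restore. Not the PR's API.
def old_states_for_new_host(new_host: int, new_count: int, old_count: int) -> list[int]:
    """Indices of the old per-host iterator states a new host would restore."""
    assert old_count % new_count == 0, "assumes old host count is a multiple of the new one"
    ratio = old_count // new_count
    return list(range(new_host * ratio, (new_host + 1) * ratio))

# With the scale-down test below (2x v6e-256 -> 1x v6e-256, i.e. 128 -> 64 hosts):
assert old_states_for_new_host(0, 64, 128) == [0, 1]
assert old_states_for_new_host(63, 64, 128) == [126, 127]
```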

Tests

  • Test scaling up:
    • Start with 1x v6e-256, run 5 steps, save checkpoints on step=0 and step=3 (log)
    • Resume with 2x v6e-256 using expansion_factor_real_data=2 and max_checkify=true (to make sure the PlaceHolderIterator's fake data is not used for training), restore the checkpoint from step=3, and run until step=9, saving checkpoints at step=6 and step=9 (log)
    • Inspect the checkpoints in base_output_directory: for every step, the "iter" folder contains 64 entries, because there are always 64 data-loading hosts, while the "items" folder has 64 processes at steps 0 and 3 and 128 processes at steps 6 and 9.
  • Test scaling down:
    • Start with 2x v6e-256, run 10 steps, save checkpoints on step 0, 3, 6, 9 (log)
    • Resume with 1x v6e-256, restore the checkpoint from step=9, and run until step=15, saving checkpoints at step=12 and step=15 (log)
    • Inspect the checkpoints in base_output_directory: for every step, the "iter" folder contains 128 entries, while the "items" folder has 128 processes at steps 0-9 and 64 processes at steps 12 and 15 (see the inspection sketch after this list).
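
A quick way to run the inspection steps above is to count the per-process entries under each saved step. The snippet below is a hedged sketch that assumes an Orbax-style layout of base_output_directory/checkpoints/<step>/<folder>; adjust the path to match the actual run's output structure.

```python
import os

# Hypothetical inspection helper; the checkpoints/<step>/<folder> layout is an
# assumption for illustration, not a documented directory structure.
def count_entries(base_output_directory: str, step: int, folder: str) -> int:
    path = os.path.join(base_output_directory, "checkpoints", str(step), folder)
    return len(os.listdir(path))

# Expected counts from the scale-up test, for example:
#   count_entries(base, 3, "iter")  -> 64   (data-loading host count is fixed)
#   count_entries(base, 3, "items") -> 64
#   count_entries(base, 6, "items") -> 128  (after resuming on 2x v6e-256)
```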

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

@aireenmei changed the title from "[WIP] Support scale down in grain iterator checkpoint" to "[WIP] Support scale down when loading grain checkpoint" on Oct 23, 2025
@aireenmei changed the title from "[WIP] Support scale down when loading grain checkpoint" to "[WIP] Support chip count change when loading grain checkpoint" on Oct 28, 2025
@aireenmei marked this pull request as ready for review on October 29, 2025 17:04
@aireenmei changed the title from "[WIP] Support chip count change when loading grain checkpoint" to "Support chip count change when loading grain checkpoint" on Oct 29, 2025
@github-actions

🤖 Hi @aireenmei, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.

@RissyRan (Collaborator)

I see a merge conflict. It's a known issue that prevents the Gemini review from being triggered.

@RissyRan (Collaborator) left a comment

Thanks Aireen! Do you think we could get this reviewed by the Grain team as well?

@github-actions

🤖 I'm sorry @aireenmei, but I was unable to process your request. Please see the logs for more details.

@aireenmei force-pushed the aireen/grain_ckpt_scale branch 6 times, most recently from 75702a5 to ec2c319 on October 31, 2025 17:26

@github-actions bot left a comment

📋 Review Summary

This pull request introduces a sophisticated feature to handle changes in chip count when resuming training with Grain checkpoints. The implementation is thorough, touching upon checkpointing, data loading, and configuration to support both scaling up and scaling down scenarios. The logic is well-structured, especially in the new GrainCheckpointHandler and the restore strategies in _restore_grain_iterator.

🔍 General Feedback

  • The addition of documentation in data_input_grain.md is very helpful for understanding this complex feature.
  • The code is well-organized, and the separation of concerns for handling different scaling scenarios is clear.
  • The changes to wait for the checkpoint manager to finish at the end of training in train.py and other trainer scripts are a good reliability improvement.

Overall, this is a solid contribution that adds significant flexibility to the training pipeline. I have one minor suggestion for improving clarity.
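
To make the scale-up strategy mentioned above concrete: with expansion_factor_real_data=2, only a fraction of the new hosts restore real iterator states, while the rest serve fake batches that must never reach training (which is what max_checkify verifies in the tests above). The sketch below is hypothetical; make_real_iterator and make_placeholder_iterator stand in for whatever the PR actually wires up.

```python
# Illustrative sketch of the scale-up restore strategy; names and signature
# are assumptions, not the actual MaxText API.
def build_host_iterator(host_index, host_count, expansion_factor,
                        make_real_iterator, make_placeholder_iterator):
    n_real = host_count // expansion_factor  # hosts that own real data
    if host_index < n_real:
        # This host restores one of the old per-host iterator states.
        return make_real_iterator(host_index)
    # Remaining hosts emit placeholder batches, excluded from training.
    return make_placeholder_iterator()
```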

@aireenmei force-pushed the aireen/grain_ckpt_scale branch 2 times, most recently from f555cfd to 89571b6 on November 1, 2025 00:15
@aireenmei force-pushed the aireen/grain_ckpt_scale branch 2 times, most recently from a107cd2 to 2b552a4 on November 1, 2025 00:35
@aireenmei force-pushed the aireen/grain_ckpt_scale branch from 2b552a4 to e7128bc on November 1, 2025 00:54
@copybara-service bot merged commit 86937a7 into main on Nov 2, 2025
32 of 33 checks passed
@copybara-service bot deleted the aireen/grain_ckpt_scale branch on November 2, 2025 04:57